16 research outputs found
Neural networks in geophysical applications
Neural networks are increasingly popular in geophysics. Because they are universal approximators, these tools can approximate any continuous function with arbitrary precision. Hence, they may yield important contributions to finding solutions to a variety of geophysical applications. However, knowledge of many methods and techniques recently developed to increase the performance and facilitate the use of neural networks does not seem to be widespread in the geophysical community. Therefore, the power of these tools has not yet been explored to its full extent. In this paper, techniques are described for faster training, better overall performance (i.e., generalization), and the automatic estimation of network size and architecture.
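The universal-approximation claim above can be illustrated with a minimal sketch: a single hidden layer of random tanh features, fit by least squares, approximating a smooth target function. This is an illustrative toy (the random-feature construction and all values are assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy universal-approximation demo: one hidden layer of random tanh
# features, output weights fit by least squares, approximating sin(x).
x = np.linspace(0.0, np.pi, 200)[:, None]
n_hidden = 50
W = rng.normal(scale=2.0, size=(1, n_hidden))   # random input weights
b = rng.normal(scale=2.0, size=n_hidden)        # random biases
H = np.tanh(x @ W + b)                          # hidden-layer activations
coef, *_ = np.linalg.lstsq(H, np.sin(x).ravel(), rcond=None)
approx = H @ coef
max_err = np.max(np.abs(approx - np.sin(x).ravel()))
print(f"max approximation error: {max_err:.2e}")
```

Even with only 50 fixed random features, the fit error on this smooth target is already small, which is the intuition behind the approximation property the abstract invokes.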
Neural networks in seismic discrimination
Neural networks are powerful and elegant computational tools that can be used in the analysis of geophysical signals. At Lawrence Livermore National Laboratory, we have developed neural networks to solve problems in seismic discrimination, event classification, and seismic and hydrodynamic yield estimation. Other researchers have used neural networks for seismic phase identification. We are currently developing neural networks to estimate depths of seismic events using regional seismograms. In this paper, different types of network architecture and representation techniques are discussed. We address the important problem of designing neural networks with good generalization capabilities. Examples of neural networks for treaty verification applications are also described.
Calibration of the Sonseca array with large magnitude regional and teleseismic events
In order to calibrate the Sonseca station, a 19-element short-period seismic array with a 9 km diameter circular aperture located in central Spain (39.68N, 3.96W), wavefield measurements made on observed seismic phases are compared with expected values. Thirty-five well-recorded regional and teleseismic events are used to study bearing and phase velocity estimation properties. Preliminary results indicate that, in general, the Sonseca array performs well for both regional and teleseismic events at frequencies below 5 Hz using standard array signal processing techniques. The main findings of this study are: (1) A systematic bias is observed in bearing estimates; the bias is a function of the true bearing for events from the easterly directions of the array and can be mitigated with a simple bias correction. Using a least-squares quadratic polynomial fit, the bearing estimation error can be reduced to less than two or three degrees. (2) Measured signal and noise coherence functions and beamforming suggest that for regional events improved SNR is obtained by beamforming in the frequency band of 0.5 to 4 Hz, with a resulting array gain as high as 10 dB. (3) Because the element spacing of the Sonseca array corresponds to that of a sparse regional array, spatial aliasing can be observed in narrowband f-k analysis at the higher frequencies. We compare the performance of narrowband and broadband frequency-wavenumber (f-k) analysis and suggest preliminary recipes for f-k and beamforming analysis.
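The bias correction in finding (1) can be sketched as follows: fit a least-squares quadratic polynomial to bearing errors as a function of true bearing, then subtract the fitted bias. The bearing range, bias shape, and noise level below are synthetic placeholders, not the Sonseca measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sketch of the quadratic bias correction: observed bearings
# carry a smooth systematic bias plus measurement noise.
true_bearing = np.linspace(60.0, 120.0, 35)          # easterly events (deg)
bias = 0.002 * (true_bearing - 90.0) ** 2 + 4.0      # assumed bias shape
observed = true_bearing + bias + rng.normal(scale=0.5, size=true_bearing.size)

error = observed - true_bearing
coeffs = np.polyfit(true_bearing, error, deg=2)      # least-squares quadratic fit
corrected = observed - np.polyval(coeffs, true_bearing)

residual = corrected - true_bearing
print(f"max residual after correction: {np.max(np.abs(residual)):.2f} deg")
```

After removing the fitted quadratic, the remaining error is only the measurement noise, consistent with the "less than two or three degrees" figure quoted in the abstract.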
Earthquake early warning system using real-time signal processing
An earthquake warning system has been developed to provide a time series profile from which vital parameters such as the time until strong shaking begins, the intensity of the shaking, and the duration of the shaking can be derived. Interaction of different types of ground motion and changes in the elastic properties of geological media throughout the propagation path result in a highly nonlinear function. We use neural networks to model these nonlinearities and develop learning techniques for the analysis of temporal precursors occurring in the emerging earthquake seismic signal. The warning system is designed to analyze the first arrival from the three components of an earthquake signal and instantaneously provide a profile of impending ground motion, in as little as 0.3 sec after first ground motion is felt at the sensors. For each new data sample, at a rate of 25 samples per second, the complete profile of the earthquake is updated. The profile consists of a magnitude-related estimate as well as an estimate of the envelope of the complete earthquake signal. The envelope provides estimates of damage parameters, such as time until peak ground acceleration (PGA) and duration. The neural network based system is trained using seismogram data from more than 400 earthquakes recorded in southern California. The system has been implemented in hardware using silicon accelerometers and a standard microprocessor. The proposed warning units can be used for site-specific applications, distributed networks, or to enhance existing distributed networks. By producing accurate and informative warnings, the system has the potential to significantly minimize the hazards of catastrophic ground motion. System design and performance issues, including error measurement in a simple warning scenario, are discussed in detail.
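The per-sample update at 25 samples per second can be illustrated with a much simpler stand-in for the network: a causal envelope tracker updated once per incoming sample, from which a time-of-peak estimate is read off. The toy seismogram and the smoothing constant `alpha` are assumptions; the authors' actual profile comes from a trained neural network:

```python
import numpy as np

# Causal, sample-by-sample envelope tracker at 25 samples/s. Each new
# sample updates the running envelope estimate, mimicking the streaming
# update described in the abstract (alpha is an assumed constant).
fs = 25.0
t = np.arange(0, 20, 1 / fs)
signal = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.5 * (t - 8.0) ** 2)  # toy record

alpha = 0.2
envelope = np.empty_like(signal)
state = 0.0
for i, x in enumerate(signal):              # one update per incoming sample
    state = alpha * abs(x) + (1 - alpha) * state
    envelope[i] = state

t_pga = t[np.argmax(envelope)]              # estimated time of peak motion
print(f"estimated time of peak: {t_pga:.2f} s")
```

A real system would feed such running features into the trained network to produce the magnitude-related estimate and envelope profile described above.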
Towards data fusion in seismic monitoring: Source characterization of mining blasts with acoustic and seismic records
Event identification that combines data from a diverse range of sensor types, such as seismic, hydroacoustic, infrasound, optical, or acoustic sensors, has been discussed recently as a way to improve treaty monitoring technology, especially for a Comprehensive Test Ban Treaty. In this exploratory study the authors compare features in acoustic and seismic data from ripple-fired mining blasts, in an effort to understand the issues of incorporating data fusion into seismic monitoring. They study the possibility of identifying features such as spectral scalloping at high frequencies using acoustic signals recorded in the near field during mining blasts. Recorded acoustic and seismic data from two mining blasts at Carlin, Nevada, were analyzed. The authors found that the periodic, impulsive nature of the ripple-fire source is clearly present in the acoustic recordings at high frequencies. The arrival time and duration of the acoustic signals are also clearly discernible at high frequencies. This is in contrast to the absence of these features in seismic signals, due to attenuation and scattering at high frequencies. The association of signals from different sensors offers solutions for difficult monitoring problems. Seismic or acoustic signals individually may not be able to detect a nuclear test hidden under a typical mining blast. However, the presence of an underground nuclear test during a mining event could be determined by deriving the mining explosion source from the acoustic recording, modeling a seismic signal from the derived source, and subtracting the modeled seismic signal from the seismic recording for the event. Recommendations in the design of data fusion systems for treaty monitoring are suggested.
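The spectral scalloping mentioned above has a simple signal-processing origin: a train of equally delayed shots acts as a comb filter, concentrating energy at multiples of 1/delay Hz. The shot delay and count below are illustrative values, not the Carlin blast parameters:

```python
import numpy as np

# Ripple-fire toy model: n_shots equal impulses separated by 'delay'
# seconds. The magnitude spectrum shows comb peaks spaced 1/delay Hz
# apart (spectral scalloping), with nulls in between.
fs = 1000.0                  # sample rate (Hz)
delay = 0.025                # assumed 25 ms between shots
n_shots = 8
n = int(fs * 1.0)            # 1 s record
source = np.zeros(n)
for k in range(n_shots):
    source[int(k * delay * fs)] = 1.0    # one impulse per shot

spectrum = np.abs(np.fft.rfft(source))
freqs = np.fft.rfftfreq(n, d=1 / fs)
peak_spacing = 1.0 / delay               # comb peaks every 40 Hz here
print(f"expected peak spacing: {peak_spacing:.0f} Hz")
```

At multiples of 1/delay all shot phasors align (magnitude n_shots), while between peaks they cancel; this periodic structure is what survives in the near-field acoustic recordings but is lost to attenuation and scattering in the seismic data.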
Automated Data Processing (ADP) research and development
Monitoring a comprehensive test ban treaty (CTBT) will require screening tens of thousands of seismic events each year. Reliable automated data analysis will be essential in keeping up with the continuous stream of events that a global monitoring network will detect. We are developing automated event location and identification algorithms by looking at the gaps and weaknesses in conventional ADP systems and by taking advantage of modern computational paradigms. Our research focus is on three areas: developing robust algorithms for signal feature extraction, integrating the analysis of critical measurements, and exploiting joint estimation techniques such as using data from acoustic, hydroacoustic, and seismic sensors. We identify several important problems for research and development, e.g., event location with approximate velocity models and event identification in the presence of outliers. We are employing both linear and nonlinear methods and advanced signal transform techniques to solve these event monitoring problems. Our goal is to increase event-interpretation throughput by employing the power and efficiency of modern computational techniques, and to improve the reliability of automated analysis by reducing the rates of false alarms and missed detections.
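One of the named problems, estimation in the presence of outliers, is often attacked with robust regression. A minimal sketch, assuming a Huber weight inside iteratively reweighted least squares and synthetic station residuals (not the abstract's actual algorithms):

```python
import numpy as np

# Robust estimation of an origin-time shift from station arrival-time
# residuals, one of which is a gross outlier. Ordinary least squares is
# pulled by the outlier; Huber-weighted IRLS largely ignores it.
true_shift = 10.0
arrivals = true_shift + np.array([0.1, -0.2, 0.05, -0.1, 0.15, 8.0])  # last is an outlier

def huber_irls(y, c=1.0, iters=20):
    est = np.mean(y)
    for _ in range(iters):
        r = y - est
        w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))  # Huber weights
        est = np.sum(w * y) / np.sum(w)                    # reweighted mean
    return est

ls_est = np.mean(arrivals)          # least squares: biased by the outlier
robust_est = huber_irls(arrivals)
print(f"LS: {ls_est:.2f}  robust: {robust_est:.2f}")
```

The same downweighting idea extends to multi-station event location, where travel-time residuals from a mislabeled phase or bad pick would otherwise corrupt the solution.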
Neural networks and wavelet analysis in the computer interpretation of pulse oximetry data
Pulse oximeters determine the oxygen saturation level of blood by measuring the light absorption of arterial blood. The sensor consists of red and infrared light sources and photodetectors. A method based on neural networks and wavelet analysis is developed for improved saturation estimation in the presence of sensor motion. Spectral and correlation functions of the dual-channel oximetry data are used by a backpropagation neural network to characterize the type of motion. Amplitude ratios of red to infrared signals as a function of time scale are obtained from the multiresolution wavelet decomposition of the two-channel data. Motion class and amplitude ratios are then combined to obtain a short-time estimate of the oxygen saturation level. A final estimate of oxygen saturation is obtained by applying a 15 s smoothing filter to the short-time measurements based on 3.5 s windows sampled every 1.75 s. The design employs two backpropagation neural networks: the first determines the motion characteristics and the second determines the saturation estimate. Our approach utilizes waveform analysis, in contrast to the standard algorithms that are based on the successful detection of peaks and troughs in the signal. The proposed algorithm is numerically efficient and has stable characteristics, with a reduced false alarm rate at a small loss in detection. The method can be rapidly developed on a digital signal processing platform.
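The windowing scheme above can be sketched directly: red/infrared AC amplitude ratios computed over 3.5 s windows hopped every 1.75 s, then smoothed over roughly 15 s. The synthetic waveforms and the linear calibration SpO2 ≈ 110 − 25·R are a common textbook approximation, not the paper's wavelet features or trained networks:

```python
import numpy as np

# Sliding-window ratio-of-ratios sketch: 3.5 s windows, 1.75 s hop,
# ~15 s smoothing of the short-time estimates, as in the abstract.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t)          # 72 bpm photoplethysmogram
red = 1.0 + 0.02 * pulse                     # synthetic DC + AC channels
ir = 1.0 + 0.04 * pulse

win = int(3.5 * fs)
hop = int(1.75 * fs)
ratios = []
for start in range(0, len(t) - win + 1, hop):
    r_ac = np.std(red[start:start + win])    # AC amplitude, red channel
    ir_ac = np.std(ir[start:start + win])    # AC amplitude, infrared
    ratios.append(r_ac / ir_ac)
ratios = np.array(ratios)

k = int(round(15.0 / 1.75))                  # ~15 s worth of estimates
smooth = np.convolve(ratios, np.ones(k) / k, mode="valid")
spo2 = 110.0 - 25.0 * smooth                 # assumed linear calibration
print(f"estimated SpO2: {spo2[-1]:.1f}%")
```

The paper's contribution is replacing the fixed ratio features with motion-classified wavelet amplitude ratios fed to a second network; the window/smoothing cadence, however, is exactly the one sketched here.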